Clock Watching

In the mythological "good old days," when many FPGA designs were nothing more than simple state machines, clocking was simple. Your average design had a single ticker oscillating merrily away at single-digit megahertz. Skew was something you did to vegetables when barbecuing shish kabobs, gating was an activity that applied to upscale housing developments, false paths were something you only ran into while hiking, and derived clocks were the two-dollar wristwatches you gave away at trade shows with your company logo on them.

Now that the revolution has come, new alarms are starting to sound. Clocks are no longer to be taken for granted. The division and subdivision of time that creates the choreography of synchronous design is an elaborate symphony-on-silicon with a skilled designer as composer and arranger. While old-school FPGA work was an Fmax drag race with the singular goal of reaching the maximum frequency your device could muster, today's design is a trapeze act where subtle and sensitive timing adjustments mean the difference between success and failure.

When it comes to clocking, ASIC design has always been a blank canvas. Anything was possible, and everything was attempted. Any number of clock lines could be routed to any number of destinations, and each clock could be subdivided, multiplied, re-phased, gated, inverted, or aligned. As a designer, you were free to create as many problems as you were willing to solve. ASIC timing analysis tools dutifully reported your progress, and buffering, re-placement, and tuning would eventually lead to a solution where the data mostly arrived ahead of the clock edge. Messing with clocks was also a near panacea for power problems, so those trying to cut back on the juice generally spent a liberal portion of their design schedule putting tight controls on which flip-flops got flopped.
All material copyright © 2003-2005 techfocus media, inc. All rights reserved.